We propose a layered hierarchical architecture called UCLA (Universal Causality Layered Architecture), which combines multiple levels of categorical abstraction for causal inference. At the top-most level, causal interventions are modeled combinatorially using a simplicial category of ordinal numbers. At the second layer, causal models are defined by a graph-type category. The non-random ``surgical'' operations on causal structures, such as edge deletion, are captured using degeneracy and face operators from the simplicial layer above. The third categorical abstraction layer corresponds to the data layer in causal inference. The fourth homotopy layer comprises additional structure imposed on the instance layer above, such as a topological space, which enables evaluating causal models on datasets. Functors map between every pair of layers in UCLA; each such functor is characterized by a universal arrow, which defines an isomorphism between the corresponding pair of categorical layers. These universal arrows define universal elements and representations through the Yoneda Lemma, and in turn lead to a new category of elements based on a construction introduced by Grothendieck. Causal inference between each pair of layers is defined as a lifting problem, a commutative diagram whose objects are categories and whose morphisms are functors characterized as different types of fibrations. We illustrate the UCLA architecture using a range of examples, including integer-valued multisets that represent a non-graphical framework for conditional independence, and causal models based on graphs and string diagrams using symmetric monoidal categories. We define causal effects in terms of the homotopy colimit of the nerve of the category of elements.
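For reference, the face operators $d_i$ and degeneracy operators $s_i$ used in the simplicial layer obey the standard simplicial identities (textbook facts about the simplex category, not claims specific to UCLA):
\[
d_i d_j = d_{j-1} d_i \ (i < j), \qquad s_i s_j = s_{j+1} s_i \ (i \le j), \qquad
d_i s_j = \begin{cases} s_{j-1} d_i & i < j,\\ \mathrm{id} & i = j \ \text{or}\ i = j+1,\\ s_j d_{i-1} & i > j+1. \end{cases}
\]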
Consider two brands that want to jointly test alternate web experiences for their customers with an A/B test. Such collaborative tests are today enabled using \textit{third-party cookies}, where each brand has information on the identity of visitors to the other brand's website. With the imminent elimination of third-party cookies, such A/B tests will become untenable. We propose a two-stage experimental design, where the two brands only need to agree on high-level aggregate parameters of the experiment to test the alternate experiences. Our design respects the privacy of customers. We propose an estimator of the Average Treatment Effect (ATE), show that it is unbiased, and theoretically compute its variance. Our demonstration describes how a marketer for a brand can design such an experiment and analyze the results. On real and simulated data, we show that the approach provides valid estimates of the ATE with low variance and is robust to the proportion of visitors overlapping across the brands.
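As an illustration of the quantity being estimated, the following is a generic difference-in-means ATE estimate with a conservative variance estimate; it is shown only for orientation and is not the paper's two-stage, privacy-preserving estimator.

```python
import numpy as np

def difference_in_means_ate(y_treat, y_control):
    """Textbook difference-in-means ATE estimate with a Neyman-style variance estimate.

    Illustrative only; the two-stage estimator in the paper is not reproduced here.
    """
    y_treat, y_control = np.asarray(y_treat, float), np.asarray(y_control, float)
    ate = y_treat.mean() - y_control.mean()
    # Conservative variance estimate for the difference in sample means.
    var = y_treat.var(ddof=1) / len(y_treat) + y_control.var(ddof=1) / len(y_control)
    return ate, var

# Example on simulated outcomes for treated and control visitors.
rng = np.random.default_rng(0)
ate_hat, var_hat = difference_in_means_ate(
    rng.normal(1.2, 1.0, 5000),  # treated
    rng.normal(1.0, 1.0, 5000),  # control
)
print(f"ATE estimate: {ate_hat:.3f}, std. error: {var_hat**0.5:.3f}")
```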
We propose a unified formalism for structure discovery of causal models and predictive state representation (PSR) models in reinforcement learning (RL) using higher-order category theory. Specifically, we model structure discovery in both settings using simplicial objects, contravariant functors from the category of ordinal numbers. Fragments of causal models that are equivalent under conditional independence, defined as causal horns, as well as subsequences of potential tests in predictive state representations, defined as predictive horns, are both horns of simplicial objects: sub-simplicial sets that result from removing the interior and the face opposite a particular vertex. Latent structure discovery in both settings involves the same fundamental mathematical problem of finding extensions of horns of simplicial objects, by solving lifting problems in commutative diagrams and exploiting weak homotopies that define higher-order symmetries. Solutions to the ``inner'' versus ``outer'' horn extension problems lead to various notions of higher-order categories, including weak Kan complexes and quasicategories. We define the abstract problem of structure discovery in both settings in terms of adjoint functors between the category of universal causal models or universal decision models and its simplicial object representation.
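For readers meeting horns for the first time, the standard definition is worth recalling (textbook material, not specific to this paper): the horn $\Lambda^n_k \subset \Delta^n$ is the sub-simplicial set obtained by deleting the interior of the $n$-simplex and the face opposite vertex $k$. A simplicial set $X$ is a weak Kan complex (quasicategory) precisely when every inner horn admits a filler:
\[
\forall\, f : \Lambda^n_k \to X \ \text{with}\ 0 < k < n,\ \ \exists\, \tilde f : \Delta^n \to X \ \ \text{such that}\ \ \tilde f \circ \iota = f,
\]
where $\iota : \Lambda^n_k \hookrightarrow \Delta^n$ is the inclusion; requiring fillers for the outer horns ($k = 0$ or $k = n$) as well yields Kan complexes.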
Conditional independence has been widely used in AI, causal inference, machine learning, and statistics. We introduce categoroids, an algebraic structure for characterizing the universal properties of conditional independence. A categoroid is defined as a hybrid of two categories: one encoding a preordered lattice structure defined by objects and arrows; the second a parameterization involving ternary objects and morphisms that define the conditional independence structure, with bridge morphisms providing the interface between the binary and ternary structures. We illustrate categoroids using three well-known examples of axiom sets: graphoids, integer-valued multisets, and separoids. Functoroids map one categoroid to another, preserving the relations defined by all three types of arrows in the codomain. We describe natural transformations across functoroids, which are natural across both regular and ternary objects, to construct universal representations of conditional independence. We use adjunctions and monads between categoroids to abstractly characterize the faithfulness of graphical and non-graphical representations of conditional independence.
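For context, the (semi-)graphoid axioms underlying the graphoid example are standard; writing $X \perp\!\!\!\perp Y \mid Z$ for conditional independence, they read (textbook statements, not specific to categoroids):
\begin{align*}
\text{Symmetry:} &\quad X \perp\!\!\!\perp Y \mid Z \;\Rightarrow\; Y \perp\!\!\!\perp X \mid Z \\
\text{Decomposition:} &\quad X \perp\!\!\!\perp YW \mid Z \;\Rightarrow\; X \perp\!\!\!\perp Y \mid Z \\
\text{Weak union:} &\quad X \perp\!\!\!\perp YW \mid Z \;\Rightarrow\; X \perp\!\!\!\perp Y \mid ZW \\
\text{Contraction:} &\quad X \perp\!\!\!\perp Y \mid Z \ \wedge\ X \perp\!\!\!\perp W \mid ZY \;\Rightarrow\; X \perp\!\!\!\perp YW \mid Z \\
\text{Intersection (graphoids):} &\quad X \perp\!\!\!\perp Y \mid ZW \ \wedge\ X \perp\!\!\!\perp W \mid ZY \;\Rightarrow\; X \perp\!\!\!\perp YW \mid Z
\end{align*}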
We propose Universal Causality, an overarching framework based on category theory that defines universal properties of causal inference, independent of the underlying representational formalism used. More formally, a universal causal model is defined as a category consisting of objects and morphisms that represent causal influences, together with structures for carrying out interventions (experiments) and evaluating their outcomes (observations). Functors map between categories, and natural transformations map between a pair of functors across the same two categories. Abstract causal diagrams in our framework are constructed using universal constructions from category theory, including the limit or colimit of an abstract causal diagram or, more generally, Kan extensions. We present two foundational results of universal causal inference. The first result, called the Universal Causality Theorem (UCT), pertains to the universality of diagrams, which are viewed as functors mapping objects and relationships from an indexing category of abstract causal diagrams to an actual causal model whose nodes are labeled by random variables and whose edges represent functional or probabilistic relationships. UCT states that any causal inference can be represented in a canonical way as the colimit of an abstract causal diagram of representable objects. UCT depends on a basic result in sheaf theory. The second result, the Causal Reproducing Property (CRP), states that any causal influence of an object X on another object Y can be represented as a natural transformation between two abstract causal diagrams. CRP follows from the Yoneda Lemma, one of the deepest results in category theory. The CRP property is analogous to the reproducing property in reproducing kernel Hilbert spaces, which underlies kernel methods in machine learning.
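Because both UCT and CRP lean on the Yoneda Lemma, its standard statement is worth recalling (this is textbook material, not a new claim of the framework): for a locally small category $\mathcal{C}$, a functor $F : \mathcal{C}^{\mathrm{op}} \to \mathbf{Set}$, and an object $c$,
\[
\mathrm{Nat}\big(\mathcal{C}(-, c),\, F\big) \;\cong\; F(c),
\]
naturally in both $c$ and $F$. Taking $F = \mathcal{C}(-, d)$ gives $\mathrm{Nat}(\mathcal{C}(-, c), \mathcal{C}(-, d)) \cong \mathcal{C}(c, d)$, the form that makes the analogy with the reproducing property $\langle k(\cdot, x), f\rangle_{\mathcal{H}} = f(x)$ of an RKHS most direct.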
Self-supervised pre-trained transformers have improved the state of the art on a variety of speech tasks. Due to the quadratic time and space complexity of self-attention, they usually operate at the level of relatively short (e.g., utterance) segments. In this paper, we study the use of context, i.e., surrounding segments, during fine-tuning and propose a new approach called context-aware fine-tuning. We attach a context module on top of the last layer of a pre-trained model to encode the whole segment into a context embedding vector which is then used as an additional feature for the final prediction. During the fine-tuning stage, we introduce an auxiliary loss that encourages this context embedding vector to be similar to context vectors of surrounding segments. This allows the model to make predictions without access to these surrounding segments at inference time and requires only a tiny overhead compared to standard fine-tuned models. We evaluate the proposed approach using the SLUE and Librilight benchmarks for several downstream tasks: Automatic speech recognition (ASR), named entity recognition (NER), and sentiment analysis (SA). The results show that context-aware fine-tuning not only outperforms a standard fine-tuning baseline but also rivals a strong context injection baseline that uses neighboring speech segments during inference.
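A rough sketch of how such a context module and auxiliary loss might be wired together is given below; the module shape, mean pooling, and cosine-similarity form of the auxiliary loss are our assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAwareHead(nn.Module):
    """Illustrative context module attached to the last layer of a pre-trained encoder.

    Hypothetical sketch: the real pooling, fusion, and loss weighting may differ.
    """
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.context_proj = nn.Linear(hidden_dim, hidden_dim)   # context module
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, last_hidden: torch.Tensor):
        # last_hidden: (batch, time, hidden_dim) from the pre-trained encoder.
        ctx = self.context_proj(last_hidden.mean(dim=1))          # context embedding vector
        fused = torch.cat([last_hidden,
                           ctx.unsqueeze(1).expand_as(last_hidden)], dim=-1)
        return self.classifier(fused), ctx

def auxiliary_context_loss(ctx: torch.Tensor, neighbor_ctx: torch.Tensor) -> torch.Tensor:
    # Encourage each segment's context embedding to resemble its neighbors' (assumed cosine form).
    return 1.0 - F.cosine_similarity(ctx, neighbor_ctx, dim=-1).mean()
```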
Advances in large pretrained language models have significantly improved their performance on conditional language generation tasks, including summarization, albeit with hallucinations. To reduce hallucinations, conventional methods propose improving beam search or using a fact checker as a postprocessing step. In this paper, we investigate the use of the Natural Language Inference (NLI) entailment metric to detect and prevent hallucinations in summary generation. We propose an NLI-assisted beam re-ranking mechanism that computes entailment probability scores between the input context and summarization model-generated beams during saliency-enhanced greedy decoding. Moreover, a diversity metric is introduced to compare its effectiveness against vanilla beam search. Our proposed algorithm significantly outperforms vanilla beam decoding on the XSum and CNN/DM datasets.
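A schematic of the re-ranking step follows; the scoring interface and the interpolation rule are placeholders of our own, and the paper's saliency-enhanced decoding and exact score combination are not reproduced.

```python
from typing import Callable, List, Tuple

def rerank_beams_with_nli(
    source: str,
    beams: List[Tuple[str, float]],                  # (candidate summary, log-probability)
    entailment_prob: Callable[[str, str], float],    # P(source entails candidate), e.g. from an NLI model
    alpha: float = 0.5,                               # assumed interpolation weight
) -> List[Tuple[str, float]]:
    """Re-rank beam candidates by mixing model log-probability with NLI entailment.

    Illustrative only: here finished beams are re-scored with a convex combination
    of the two signals and sorted best-first.
    """
    rescored = []
    for text, logprob in beams:
        nli_score = entailment_prob(source, text)
        combined = alpha * logprob + (1.0 - alpha) * nli_score
        rescored.append((text, combined))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)
```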
Coordinate-based implicit neural networks, or neural fields, have emerged as useful representations of shape and appearance in 3D computer vision. Despite these advances, however, it remains challenging to build neural fields for categories of objects without datasets like ShapeNet that provide canonicalized object instances consistently aligned in 3D position and orientation (pose). We present Canonical Field Network (CaFi-Net), a self-supervised method to canonicalize the 3D pose of instances from an object category represented as neural fields, specifically neural radiance fields (NeRFs). CaFi-Net directly learns from continuous and noisy radiance fields using a Siamese network architecture designed to extract equivariant field features for category-level canonicalization. During inference, our method takes pre-trained neural radiance fields of novel object instances at arbitrary 3D pose and estimates a canonical field with consistent 3D pose across the entire category. Extensive experiments on a new dataset of 1300 NeRF models across 13 object categories show that our method matches or exceeds the performance of 3D point cloud-based methods.
Cloud computing holds the promise of reduced costs through economies of scale. To realize this promise, cloud computing vendors typically solve sequential resource allocation problems, where customer workloads are packed onto shared hardware. Virtual machines (VMs) form the foundation of modern cloud computing, as they logically abstract user compute from the shared physical infrastructure. Traditionally, VM packing problems are solved by predicting demand, followed by a Model Predictive Control (MPC) optimization over a future horizon. We introduce an approximate formulation of an industrial VM packing problem as an MILP with soft constraints parameterized by the predictions. Recently, predict-and-optimize (PnO) was proposed for end-to-end training of prediction models by back-propagating the cost of decisions through the optimization problem. However, PnO is unable to scale to the large prediction horizons prevalent in cloud computing. To tackle this issue, we propose the Predict-and-Critic (PnC) framework, which outperforms PnO with just a two-step horizon by leveraging reinforcement learning. PnC jointly trains a prediction model and a terminal Q function that approximates the cost-to-go over a long horizon, by back-propagating the cost of decisions through the optimization problem \emph{and from the future}. The terminal Q function allows us to solve a much smaller two-step horizon optimization problem than the multi-step horizon necessary in PnO. We evaluate PnO and the PnC framework on two datasets, three workloads, and with disturbances not modeled in the optimization problem. We find that PnC significantly improves decision quality over PnO, even when the optimization problem is not a perfect representation of reality. We also find that hardening the soft constraints of the MILP and back-propagating through the constraints improves decision quality for both PnO and PnC.
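Schematically, and in our own notation rather than the paper's, the contrast between the PnO and PnC objectives can be written as follows, where $\hat{\theta}$ denotes the demand predictions parameterizing the soft-constrained MILP and $Q_{\phi}$ is the learned terminal Q function:
\[
\text{PnO:}\;\; \min_{u_{0:H-1}} \sum_{t=0}^{H-1} c(x_t, u_t; \hat{\theta}), \qquad
\text{PnC:}\;\; \min_{u_0, u_1}\; c(x_0, u_0; \hat{\theta}) + c(x_1, u_1; \hat{\theta}) + Q_{\phi}(x_2).
\]
Truncating the horizon to two steps keeps the optimization small, while $Q_{\phi}$, trained by back-propagating decision costs through the optimization and bootstrapping from future costs, stands in for the remaining horizon.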
Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. This is particularly useful in modeling medical systems, where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences. Such knowledge is often available, or can often be distilled into a (possibly black-box) model $M$: for instance, the unicycle model for an F1 racing car. In this light, we consider the following problem: given a model $M$ and a state-transition dataset, we wish to best approximate the system model while remaining within a bounded distance of $M$. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples called memories, using the idea of a growing neural gas. Next, using these memories we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network when the input is drawn from a particular subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this only leads to a bounded increase in approximation error, which can be controlled by increasing the number of memories. We experimentally show that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to specified $M$ models (each encoding various constraints) with order-of-magnitude improvements compared to the augmented Lagrangian and vanilla training methods.
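A minimal sketch of the wrapper idea appears below: nearest-memory lookup followed by clamping the network's output to per-region bounds. It assumes the memories and bounds have already been computed (e.g., via a growing neural gas and evaluation of $M$ over each region), and omits the method's training-time enforcement.

```python
import numpy as np

class ConformanceWrapper:
    """Clamp a network's prediction to per-region bounds derived from a reference model M.

    Hypothetical sketch: memories and bounds are assumed precomputed; the actual
    method's partitioning and bound computation are more involved.
    """
    def __init__(self, memories: np.ndarray, lower: np.ndarray, upper: np.ndarray):
        self.memories = memories   # (num_memories, state_dim) representative samples
        self.lower = lower         # (num_memories, out_dim) per-region lower bounds
        self.upper = upper         # (num_memories, out_dim) per-region upper bounds

    def __call__(self, x: np.ndarray, net_prediction: np.ndarray) -> np.ndarray:
        # Assign the input state to its nearest memory (a Voronoi-style partition).
        region = np.argmin(np.linalg.norm(self.memories - x, axis=1))
        # Project the network's output into the bounds allowed for that region.
        return np.clip(net_prediction, self.lower[region], self.upper[region])
```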